1-Consciousness-Sense-Vision-Physiology-Depth Perception

depth perception

Brain can find depth and distance {depth perception} {distance perception} in scenes, paintings, and photographs.

depth: closeness

Closer objects have higher edge contrast, sharper edges, positions nearer the scene bottom, larger size, overlap on top of other objects, and transparency in front of other objects. Edge contrast is the most important cue, and edge sharpness is the next most important. Position near the scene bottom matters more when eye level is known. Transparency is the least important cue. Nearer objects also appear redder.

depth: farness

Farther objects have smaller retinal size; are closer to the horizon (if below the horizon, they are higher in the scene than nearer objects); have lower contrast; are hazier, blurrier, and fuzzier, with less texture detail; and are bluer or greener. Nearer objects overlap farther objects and cast shadows on them.

binocular depth cue: convergence

Focusing on near objects causes extraocular muscles to turn eyeballs toward each other, and kinesthesia sends this feedback to vision system. More tightening and stretching means nearer. Objects farther than ten meters cause no muscle tightening or stretching, so convergence information is useful only for distances less than ten meters.

binocular depth cue: shadow stereopsis

For far objects, with very small retinal disparity, shadows can still have perceptibly different angles {shadow stereopsis} [Puerta, 1989], so larger angle differences are nearer, and smaller differences are farther.

binocular depth cue: stereopsis

If eye visual fields overlap, the two scenes differ by a linear displacement, due to different sight-line angles. For a visual feature, displacement is the triangle base, which has angles at each end between the displacement line and sight-line, allowing triangulation to find distance. At farther distances, displacement is smaller and angle differences from 90 degrees are smaller, so distance information is imprecise.
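As a minimal sketch of this triangulation, the snippet below assumes a simplified pinhole model of the two eyes with a known interocular baseline and focal length (the function name and numbers are illustrative assumptions, not from the source); the displacement of a feature between the two views (disparity) then converts to distance in one division.

```python
import math

def depth_from_disparity(baseline_m, focal_length_px, disparity_px):
    """Estimate distance to a feature from binocular disparity.

    Assumes a simplified pinhole model of the two eyes: the feature's
    horizontal displacement between the left and right views (disparity)
    shrinks as distance grows, so depth = baseline * focal / disparity.
    """
    if disparity_px <= 0:
        return math.inf  # no measurable displacement: feature effectively at infinity
    return baseline_m * focal_length_px / disparity_px

# Example: 6.5 cm interocular baseline, assumed focal length of 1000 pixels.
# A 20-pixel disparity puts the feature at ~3.25 m; a 2-pixel disparity at ~32.5 m.
print(depth_from_disparity(0.065, 1000, 20))  # 3.25
print(depth_from_disparity(0.065, 1000, 2))   # 32.5
```

Because distance varies inversely with disparity, a fixed measurement error in disparity produces a much larger distance error for far features, matching the imprecision noted above.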

binocular depth cue: inference

Brain also infers depth for objects at the edges of the region of retinal overlap in stereo views, where binocular information is incomplete.

monocular depth cue: aerial perspective

Higher scene contrast means nearer, and lower contrast means farther. Bluer means farther, and redder means nearer.

monocular depth cue: accommodation

Focusing on near objects causes ciliary muscles to tighten to increase lens curvature, and kinesthesia sends this feedback to vision system. More ciliary-muscle tightening means nearer. Objects farther than two meters cause essentially no change in ciliary-muscle tension, so accommodation information is useful only for distances less than two meters.

monocular depth cue: blur

More blur means farther, and less blur means nearer.

monocular depth cue: color saturation

Objects with less saturated (grayer) colors are farther, and objects with more saturated colors are nearer.

monocular depth cue: color temperature

Bluer objects are farther, and redder objects are nearer.

monocular depth cue: contrast

Higher scene contrast means nearer, and lower contrast means farther. Edge contrast, edge sharpness, overlap, and transparency depend on contrast.

monocular depth cue: familiarity

People have previous experience with objects and their sizes, so for a familiar object, larger retinal size means closer, and smaller retinal size means farther.

monocular depth cue: fuzziness

Fuzzier objects are farther, and clearer objects are nearer.

monocular depth cue: haziness

Hazier objects are farther, and clearer objects are nearer.

monocular depth cue: height above and below horizon

Objects closer to horizon are farther, and objects farther from horizon are nearer. If object is below horizon, higher objects are farther, and lower objects are nearer. If object is above horizon, lower objects are farther, and higher objects are nearer.

monocular depth cue: kinetic depth perception

Objects becoming larger are moving closer, and objects becoming smaller are moving away {kinetic depth perception}. Kinetic depth perception is the basis for judging time to collision.
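A minimal sketch of that time-to-collision judgment, using the standard looming ratio (often called tau); the function name and example values are illustrative assumptions, not from the source.

```python
def time_to_collision(visual_angle_rad, expansion_rate_rad_per_s):
    """Estimate time to collision from looming (retinal expansion) alone.

    For an object approaching at constant speed, the ratio of its current
    visual angle to its rate of angular expansion approximates the time
    remaining before contact, without knowing its size or distance.
    """
    if expansion_rate_rad_per_s <= 0:
        return float("inf")  # not looming: no collision expected
    return visual_angle_rad / expansion_rate_rad_per_s

# Example: an object subtending 0.02 rad and growing at 0.01 rad/s
# is roughly 2 seconds from contact.
print(time_to_collision(0.02, 0.01))  # 2.0
```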

monocular depth cue: lighting

Light and shade have contours. Light is typically above objects. Light typically falls on nearer objects.

monocular depth cue: motion parallax

While looking at an object, if observer moves, other objects moving backwards are nearer than the fixated object, and other objects moving forwards are farther than the fixated object. Among the farther objects, those moving forwards faster are farther, and those moving slower are nearer. Among the nearer objects, those moving backwards faster are nearer, and those moving slower are farther. Some birds bob their heads to induce motion parallax, and squirrels move orthogonally to objects for the same reason. While observer moves looking straight ahead, objects sweeping backwards faster are closer, and objects sweeping backwards slower are farther.
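As an illustrative sketch of the looking-straight-ahead case, assuming the observer translates sideways, views roughly perpendicular to the motion, and does not track any object (names and numbers are assumptions):

```python
def distance_from_parallax(observer_speed_m_per_s, angular_speed_rad_per_s):
    """Estimate distance from motion parallax in the simplest geometry.

    Assumes the observer translates sideways while looking roughly
    perpendicular to the direction of motion and fixates nothing, so a
    stationary object at distance d sweeps across the retina at angular
    speed v / d. Nearer objects therefore sweep faster.
    """
    if angular_speed_rad_per_s <= 0:
        return float("inf")
    return observer_speed_m_per_s / angular_speed_rad_per_s

# Example: walking at 1.5 m/s, an object drifting across the view
# at 0.5 rad/s is about 3 m away; one drifting at 0.05 rad/s is ~30 m away.
print(distance_from_parallax(1.5, 0.5))   # 3.0
print(distance_from_parallax(1.5, 0.05))  # 30.0
```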

monocular depth cue: occlusion

Objects that overlap other objects {interposition} are nearer, and objects behind other objects are farther {pictorial depth cue}. Objects whose contours are interrupted by an occluding contour are farther.

monocular depth cue: peripheral vision

At the visual periphery, parallel lines appear to curve, like the effect of a fisheye lens, framing the visual field.

monocular depth cue: perspective

By linear perspective, parallel lines converge, so, for same object, smaller size means farther distance.

monocular depth cue: relative movement

If objects physically move at the same speed, objects that appear to move slower are farther, and objects that appear to move faster are nearer, for a stationary observer.

monocular depth cue: relative size

If two objects have the same shape and are judged to be the same physical size, the object with larger retinal size is closer.

monocular depth cue: retinal size

If observer has previous experience with object size, object retinal size allows calculating distance.
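A hedged sketch of that calculation, assuming the object's physical size is known from experience and that simple small-triangle geometry applies (names and values are illustrative):

```python
import math

def distance_from_known_size(physical_size_m, visual_angle_rad):
    """Estimate absolute distance from a familiar object's known size.

    If the physical size is known from experience, the triangle formed by
    the object and the eye gives
    distance = size / (2 * tan(visual_angle / 2)),
    which for small angles is roughly size / visual_angle.
    """
    return physical_size_m / (2 * math.tan(visual_angle_rad / 2))

# Example: a person assumed to be 1.7 m tall who subtends 0.1 rad
# is about 17 m away.
print(round(distance_from_known_size(1.7, 0.1), 1))  # ~17.0
```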

monocular depth cue: shading

Light and shade have contours. Shadows are typically below objects. Shade typically falls on farther objects.

monocular depth cue: texture gradient

Senses can detect gradients as difference ratios. Surface-texture elements that are larger and less fuzzy are nearer, and elements that are smaller and fuzzier are farther. Bluer and hazier surface texture is farther, and redder and less hazy surface texture is nearer.

properties: precision

Depth-calculation accuracy and precision are low.

properties: rotation

Fixed object appears to revolve around eye if observer moves.

factors: darkness

In the dark, objects appear closer.

processes: learning

People learn depth perception and can lose depth-perception abilities.

processes: coordinates

Binocular depth perception requires only ground plane and eye point to establish coordinate system. Perhaps, sensations aid depth perception by building geometric images [Poggio and Poggio, 1984].

processes: two-and-one-half dimensions

ON-center-neuron, OFF-center-neuron, and orientation-column intensities build two-dimensional line arrays, then two-and-one-half-dimensional contour arrays, and then three-dimensional surfaces and texture arrays [Marr, 1982].

processes: three dimensions

Brain derives three-dimensional images from two-dimensional ones by assigning convexity and concavity to lines and vertices and making convexities and concavities consistent.

processes: triangulation model

Animals continually track distances and directions to distinctive landmarks.

continuity constraint

Adjacent points not at edges are on the same surface and so are at nearly the same distance {continuity constraint, depth}.

corresponding retinal points

Scenes land on right and left eye with same geometric shape, so feature distances and orientations are the same {corresponding retinal points}.

cyclopean stimulus

Brain stimuli {cyclopean stimulus} can result only from binocular disparity.

distance ratio

One eye can find object-size to distance ratio {distance ratio} {geometric depth}, using three object points. See Figure 1.

Eye fixates on object center point, edge point, and opposite-edge point. Assume object is perpendicular to sightline. Assume retina is planar. Assume that eye is spherical, rotates around center, and has calculable radius.

Light rays go from center point, edge point, and opposite edge point to retina. Using kinesthetic and touch systems and motor cortex, brain knows visual angles and retinal distances. Solving equations can find object-size to distance ratio.
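One way the solution could look, under the stated assumptions (object perpendicular to the sight line): the edge and opposite-edge rays give the subtended visual angle, and the size-to-distance ratio follows directly. The sketch below is illustrative, not the source's derivation.

```python
import math

def size_to_distance_ratio(visual_angle_rad):
    """Size-to-distance ratio of an object from its subtended visual angle.

    With the object assumed perpendicular to the sight line, the edge and
    opposite-edge rays form an isosceles triangle with the eye, so
    size / distance = 2 * tan(visual_angle / 2).
    One eye recovers the ratio even though absolute size and absolute
    distance individually remain unknown.
    """
    return 2 * math.tan(visual_angle_rad / 2)

# Example: an object subtending 0.2 rad has size ~0.2 times its distance,
# e.g. about 0.6 m wide if it is 3 m away, or about 2 m wide if 10 m away.
print(round(size_to_distance_ratio(0.2), 3))  # ~0.201
```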

When eye rotates, scenes do not change, except for focus. See Figures 2 and 3.

calculating distances to space points

Vision cone receptors receive from a circular area of space that subtends one minute of arc (Figure 3). Vision neurons receive from a circular area of space that subtends one minute to one degree of arc.

To detect distance, neuron arrays receive from a circular area of space that subtends one degree of arc (Figure 4). For the same angle, circular surfaces at farther distances have longer diameters, bigger areas, and smaller circumference curvature.

Adjacent neuron arrays subtend the same visual angle and have retinal (and cortical) overlap (Figure 5). Retinal and cortical neuron-array overlap defines a constant length. Constant-length retinal-image size defines the subtended visual angle, which varies inversely with distance, allowing distance to be calculated in one step (r = s / A, with the angle A in radians).
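A minimal sketch of that one-step calculation, reading s as the constant physical extent subtended by a neuron array's overlap region and A as its visual angle in radians (an interpretation for illustration, not a claim about the source's model):

```python
def distance_from_subtended_angle(arc_length_m, visual_angle_rad):
    """One-step distance estimate r = s / A.

    If a feature of known, constant physical extent s subtends visual angle
    A radians, then, treating s as an arc of a circle centered on the eye,
    s = r * A and so r = s / A. The angle varies inversely with distance,
    so a smaller angle means a farther feature.
    """
    if visual_angle_rad <= 0:
        return float("inf")
    return arc_length_m / visual_angle_rad

# Example: a 0.5 m feature subtending about 0.01 rad is roughly 50 m away.
print(distance_from_subtended_angle(0.5, 0.01))  # 50.0
```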

Each neuron array sends to a register for a unique spatial direction. The register calculates distance and finds color. Rather than use multiple registers at multiple locations, as in neural networks or holography, a single register can place a color at the calculated distance in the known direction. There is one register for each direction and distance. Registers are not physical neuron conglomerations but functional entities.

divergence of eyes

Both eyes can turn outward {divergence, eye}, away from each other, as objects get farther. If divergence is successful, there is no retinal disparity.

Emmert law

Brain expands the perceived size of more distant objects in proportion to their contracted retinal-image size, so that, for a fixed retinal image, apparent size increases with perceived distance {size-constancy scaling} {Emmert's law} {Emmert law}. Brain determines size-constancy scaling by eye convergence, geometric perspective, texture gradients, and image sharpness. Texture-gradient element size decreases with distance. Image sharpness decreases with distance.
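An illustrative sketch of size-constancy scaling, assuming apparent linear size is proportional to perceived distance for a fixed retinal angle (the afterimage example and values are assumptions):

```python
import math

def perceived_size_emmert(retinal_angle_rad, perceived_distance_m):
    """Emmert-style scaling: perceived size grows with perceived distance.

    For a fixed retinal image (e.g. an afterimage), apparent linear size
    scales as 2 * distance * tan(retinal_angle / 2).
    """
    return 2 * perceived_distance_m * math.tan(retinal_angle_rad / 2)

# Example: the same 0.05 rad afterimage looks ~0.05 m across on a surface
# judged 1 m away, but ~0.5 m across on a surface judged 10 m away.
print(round(perceived_size_emmert(0.05, 1), 3))   # ~0.05
print(round(perceived_size_emmert(0.05, 10), 2))  # ~0.5
```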

triangulation by eye

Two eyes can measure relative distance to scene point, using geometric triangulation {triangulation, eye}. See Figure 1.

comparison

Comparing triangulations from two different distances does not give more information. See Figure 2.

movement

Moving eye sideways while tracking scene point can calculate distance from eye to point, using triangulation. See Figure 3.

Moving eye sideways while tracking scene points calibrates distances, because other scene points travel across retina. See Figure 4.

Moving eye from looking at object edge to looking at object middle can determine scene-point distance. See Figure 5.

Moving eye from looking at object edge to looking at object other edge at same distance can determine scene-point distance. See Figure 6.

uniqueness constraint

Each scene feature lands on only one point in each retina {uniqueness constraint, depth}, so brain stereopsis can match each right-retina scene point to at most one left-retina scene point.

1-Consciousness-Sense-Vision-Physiology-Depth Perception-Cue

depth cue

Various features {depth cue}| {cue, depth} signal distance. Depth cues are accommodation, colors, color saturation, contrast, fuzziness, gradients, haziness, distance below horizon, linear perspective, movement directions, occlusions, retinal disparities, shadows, size familiarity, and surface textures.

types

Non-metrical depth cues can show relative depth, such as object blocking other-object view. Metrical depth cues can show quantitative information about depth. Absolute metrical depth cues can show absolute distance by comparison, such as comparing to nose size. Relative metrical depth cues can show relative distance by comparison, such as twice as far away.

aerial perspective

Vision has less resolution at far distances. Air has haze, smoke, and dust, which scatter and absorb light, so farther objects are bluer, have less light intensity, and have blurrier edges {aerial perspective}| than if air were transparent. (Air itself scatters blue more than red, but this effect is small except over kilometer distances.)

binocular depth cue

Brain perceives depth using scene points that stimulate the right and left eyes differently {binocular depth cue} {binocular depth perception}. The two eyes differ in convergence, retinal disparity, and seen surface area.

surface area size

Brain can judge distance by overlap, total scene area, and area-change rate. Looking at surfaces, eyes see semicircles. See Figure 1. Front edge is semicircle diameter, and vision field above that line is semicircle half-circumference. For two eyes, semicircles overlap in middle. Closer surfaces make overlap less, and farther surfaces make overlap more. Total scene surface area is more for farther surfaces and less for closer surfaces. Movement changes perceived area at rate that depends on distance. Closer objects have faster rates, and farther objects have slower rates.

convergence of eyes

For fixation, both eyes turn toward each other {convergence, eye} {eye convergence} when objects are nearer than 10 meters. If convergence is successful, there is no retinal disparity. Greater eye convergence means object is closer, and lesser eye convergence means object is farther. See Figure 1.
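A minimal sketch of distance from the convergence angle, assuming symmetric fixation and an interocular separation of about 6.5 cm (an assumed value, not from the source):

```python
import math

def distance_from_vergence(interocular_m, vergence_angle_rad):
    """Distance to the fixated point from the eyes' convergence angle.

    Assumes symmetric fixation: the two lines of sight meet at the target
    with total vergence angle gamma, so distance ~ a / (2 * tan(gamma / 2)),
    where a is the interocular separation.
    """
    if vergence_angle_rad <= 0:
        return float("inf")  # parallel sight lines: target effectively at infinity
    return interocular_m / (2 * math.tan(vergence_angle_rad / 2))

# Example: with a 0.065 m interocular distance, fixation at 1 m needs a
# vergence angle of ~3.7 degrees, but fixation at 10 m needs only ~0.37
# degrees, which is why convergence stops being useful beyond roughly 10 m.
for d in (1.0, 10.0):
    gamma = 2 * math.atan(0.065 / (2 * d))
    print(round(math.degrees(gamma), 2), round(distance_from_vergence(0.065, gamma), 2))
```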

intensity difference during movement

Brain can judge surface relative distance by intensity change during movement toward and away from surface {intensity difference during movement}. See Figure 1.

moving closer

Moving from point to half that distance increases intensity four times, because eye gathers four times more light at closer radius.

moving away

Moving from point to double that distance decreases intensity four times, because eye gathers four times less light at farther radius.
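A small sketch of the inverse-square relation behind both cases above (names and numbers are illustrative):

```python
def relative_intensity(reference_distance_m, new_distance_m):
    """Relative received intensity after moving from one distance to another.

    Assumes a point-like source and the inverse-square law: the light
    gathered by the eye scales as 1 / distance^2, so halving the distance
    quadruples intensity and doubling it cuts intensity to one quarter.
    """
    return (reference_distance_m / new_distance_m) ** 2

# Examples matching the text: moving to half the distance, then to double it.
print(relative_intensity(4.0, 2.0))  # 4.0  (four times brighter)
print(relative_intensity(4.0, 8.0))  # 0.25 (one quarter as bright)
```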

moving sideways

Movement side to side and up and down changes intensity slightly by changing distance slightly. Perhaps, saccades and/or eyeball oscillations help determine distances.

memory

Experience with constant-intensity objects establishes distances.

accommodation

Looking at an object while it or the eye moves closer or farther causes lens-muscle tightening or loosening and makes the visual angle larger or smaller. If brain knows depth, movement toward and away can measure source intensity.

light ray

Scene points along same light ray project to same retina point. See Figure 2.

haze

Atmospheric haze attenuates light intensity. Each unit of distance removes a roughly fixed fraction of the remaining light, because an object twice as far away is seen through twice as many haze particles, so intensity falls off approximately exponentially with distance; for thin haze the loss is roughly proportional to distance.
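A hedged sketch using the standard Beer-Lambert attenuation model; the extinction coefficient is an assumed illustrative value, not from the source.

```python
import math

def transmitted_fraction(distance_m, extinction_per_m=0.001):
    """Fraction of light surviving a path through uniform haze.

    Assumes Beer-Lambert attenuation: each meter of haze removes a fixed
    fraction of the remaining light, so transmission = exp(-k * distance).
    For small k * distance this is roughly 1 - k * distance, i.e. the loss
    grows approximately in proportion to distance.
    """
    return math.exp(-extinction_per_m * distance_m)

# Example: with k = 0.001 per meter, ~90% of light survives 100 m,
# ~37% survives 1000 m, and ~13.5% survives 2000 m.
for d in (100, 1000, 2000):
    print(d, round(transmitted_fraction(d), 3))
```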

sound

Sound-intensity changes can find distances. Bats use sonar because it is too dark to see at night. Dolphins use sonar because water distorts light.

monocular depth cue

One eye can perceive depth {monocular depth cue}. Monocular depth cues are accommodation, aerial perspective, color, color saturation, edge, monocular movement parallax, occlusion, overlap, shadows, and surface texture.

occlusion and depth

Closer object can hide farther object {occlusion, cue}|. Perception knows many rules about occlusion.

stereoscopic depth

Using both eyes can make depth and three dimensions appear {stereoscopic depth} {stereoscopy} {stereopsis}. Stereopsis aids random shape perception. Stereoscopic data analysis is independent of other visual analyses. Monocular depth cues can cancel stereoscopic depth. Stereoscopy does not allow highly unlikely depth reversals or unlikely depths.

texture gradient

Features farther away are smaller than when closer, so surfaces have larger texture nearby and smaller texture farther away {texture gradient}.
